Advanced Lane Finding Project (images only)

The goals / steps of this project are the following:

1. Camera Calibration

To start, we calibrate the camera to remove the distortion introduced by the lens. To measure the lens distortion, we take pictures of a chessboard from different points of view. Each picture is then analyzed to locate the chessboard corners, and a calibration matrix is computed. This matrix can undistort any picture taken with this camera.

1.1 Read Pictures and Extract Corner Positions

I start by preparing "object points", which will be the (x, y, z) coordinates of the chessboard corners in the world. Here I am assuming the chessboard is fixed on the (x, y) plane at z=0, such that the object points are the same for each calibration image. Thus, objp is just a replicated array of coordinates, and objpoints will be appended with a copy of it every time I successfully detect all chessboard corners in a test image. imgpoints will be appended with the (x, y) pixel position of each of the corners in the image plane with each successful chessboard detection.

1.2 Get, Save, and Test the Matrix

I then used the output objpoints and imgpoints to compute the camera calibration and distortion coefficients using the cv2.calibrateCamera() function. I applied this distortion correction to the test image using the cv2.undistort() function. The camera matrix and distortion coefficients are also saved in dist_pickle, to be used on any picture taken by the same camera.

2. Distortion Correction on Raw Images

Using the matrix and distortion coefficients computed before, I undistort all pictures.

2.1. Test on one Raw Image

2.2. Transforming all pictures and saving them

3. Thresholded binary image

3.1. Analysis of the Best Layers

3.1.1. Which is the best layer in RGB? ---> R is the best layer here

Result: Red is the best RGB layer in all pictures, because it renders the road lines brightest.

3.1.2. Which is the best layer in HLS? ---> S is the best layer here

Result: S is the best layer under strong lighting. However, L is better for dashed and small lines, for example the lines farther ahead.

3.1.3. Which is the best layer in HSV? ---> V is the best layer here

Result: V is the best HSV layer here.

Overall result: S is the best layer.

3.1.4. Which is the second-best layer? (R vs L vs V) --> searching for the best channel for small lines

Result: R is the best of these: R > V >> L.

Result: R is the second-best layer.

3.2. Threshold Tuning

3.2.1. Sobel X with Threshold (on the R Channel)

The R channel is used for Sobel X. The result is then normalized to 255, and the relevant threshold is applied.

For the harder_challenge video, there are many trees near the road, which are filtered out by neither the R channel nor the Sobel threshold. Because of that, I use the green range of the H channel and subtract the green pixels from the result image.

sx_thresh_min: 30 is too high; it deletes some real road lines. 10 is too low; too much noise remains in the pictures.

sx_thresh_max: 100 is enough

3.2.2. S-Channel with combined Threshold

I tried two approaches:

One is only the S channel with a threshold. By tuning, I realized that it is a compromise between seeing road lines and not seeing shadows (which would cause falsely fitted lines).

The second is the S-channel threshold plus an R threshold. To be able to raise the minimum threshold, shadows have to be erased from the picture. Shadows and other dark artifacts are dark pixels in the R channel. When a pixel in the R channel is below sr_thresh, the result is 0, even if it would pass the S threshold.

I compare both approaches.

SR_thresholded is better on these test images. In the background there is relevant information that is not filtered out, but it will be cut off by the view transformation.
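
A sketch of the combined SR approach; the s_thresh and sr_thresh defaults are illustrative assumptions, not the tuned notebook values:

```python
import numpy as np

def sr_threshold(s_channel, r_channel, s_thresh=(120, 255), sr_thresh=60):
    # S-channel threshold picks out saturated lane paint.
    binary = np.zeros_like(s_channel, dtype=np.uint8)
    binary[(s_channel >= s_thresh[0]) & (s_channel <= s_thresh[1])] = 1
    # Zero out pixels that are dark in the R channel (shadows),
    # even if they passed the S threshold.
    binary[r_channel < sr_thresh] = 0
    return binary
```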

3.3 Final Threshold Binary Function

4. Bird's-eye View Transform

4.1. Search Points in Straight Lines

4.2. Transformation Function

4.3. Testing the Transformation Function

5. Detect the lane pixels

5.1 Complex Line Detection

A histogram of the lower half of the image is calculated. The maximum on each side of the image center marks the starting x position of a line. Each line is then searched for piece by piece in stacked windows.

This yields:

left_fit and right_fit, the polynomial coefficients a, b, and c of x = ay^2 + by + c.

left_fit_x and right_fit_x, the lists of line points along the x axis; ploty is the list along the y axis.
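
The histogram-and-windows search above can be sketched as follows (nwindows, margin, and minpix are assumed tuning values):

```python
import numpy as np

def fit_lane_lines(binary_warped, nwindows=9, margin=100, minpix=50):
    # Histogram of the lower half; the peak on each side gives a line base.
    histogram = np.sum(binary_warped[binary_warped.shape[0] // 2:, :], axis=0)
    midpoint = histogram.shape[0] // 2
    leftx_current = int(np.argmax(histogram[:midpoint]))
    rightx_current = int(np.argmax(histogram[midpoint:])) + midpoint

    nonzeroy, nonzerox = binary_warped.nonzero()
    window_height = binary_warped.shape[0] // nwindows
    left_inds, right_inds = [], []

    for w in range(nwindows):
        # Window spans, counted from the bottom of the image.
        y_low = binary_warped.shape[0] - (w + 1) * window_height
        y_high = binary_warped.shape[0] - w * window_height
        in_y = (nonzeroy >= y_low) & (nonzeroy < y_high)
        good_left = np.where(in_y & (np.abs(nonzerox - leftx_current) < margin))[0]
        good_right = np.where(in_y & (np.abs(nonzerox - rightx_current) < margin))[0]
        left_inds.append(good_left)
        right_inds.append(good_right)
        # Recenter the next window on the mean x of the found pixels.
        if len(good_left) > minpix:
            leftx_current = int(nonzerox[good_left].mean())
        if len(good_right) > minpix:
            rightx_current = int(nonzerox[good_right].mean())

    left_inds = np.concatenate(left_inds)
    right_inds = np.concatenate(right_inds)

    # Fit x = a*y^2 + b*y + c for each line.
    left_fit = np.polyfit(nonzeroy[left_inds], nonzerox[left_inds], 2)
    right_fit = np.polyfit(nonzeroy[right_inds], nonzerox[right_inds], 2)
    ploty = np.arange(binary_warped.shape[0])
    left_fit_x = np.polyval(left_fit, ploty)
    right_fit_x = np.polyval(right_fit, ploty)
    return left_fit, right_fit, left_fit_x, right_fit_x, ploty
```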

5.2. Search from Prior

Knowing where the line was in the previous frame, the pipeline searches for valid pixels nearby and computes the new line.
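
As a sketch, keep only the pixels within a margin of each previous polynomial and refit (margin is an assumed value):

```python
import numpy as np

def search_from_prior(binary_warped, left_fit, right_fit, margin=100):
    # Select nonzero pixels within +/- margin of the previous fits.
    nonzeroy, nonzerox = binary_warped.nonzero()
    left_sel = np.abs(nonzerox - np.polyval(left_fit, nonzeroy)) < margin
    right_sel = np.abs(nonzerox - np.polyval(right_fit, nonzeroy)) < margin
    # Refit the second-order polynomials on the selected pixels.
    new_left_fit = np.polyfit(nonzeroy[left_sel], nonzerox[left_sel], 2)
    new_right_fit = np.polyfit(nonzeroy[right_sel], nonzerox[right_sel], 2)
    return new_left_fit, new_right_fit
```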

6. Back onto the original image with visual output

The lane is transformed from the warped image back into the undistorted view, then combined with the original undistorted image.

7. Determine the Curvature and Vehicle position

7.1. Common Variables

7.2. Curvature Calculation
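
The radius follows from the fitted polynomial x = ay^2 + by + c after converting to meters. The meters-per-pixel values below are standard assumptions (about 30 m of lane length over 720 px, 3.7 m of lane width over 700 px), not measured values:

```python
import numpy as np

ym_per_pix = 30 / 720   # assumed meters per pixel in y
xm_per_pix = 3.7 / 700  # assumed meters per pixel in x

def curvature_radius(ploty, fit_x):
    # Refit the line in world units, then evaluate
    # R = (1 + (2*a*y + b)^2)^(3/2) / |2*a| at the bottom of the image.
    fit_cr = np.polyfit(ploty * ym_per_pix, fit_x * xm_per_pix, 2)
    y_eval = np.max(ploty) * ym_per_pix
    return ((1 + (2 * fit_cr[0] * y_eval + fit_cr[1]) ** 2) ** 1.5 /
            np.abs(2 * fit_cr[0]))
```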

7.3 Determine vehicle position

Vehicle position relative to the lane center. Assumption: the camera is mounted at the center of the car.
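
Under that assumption, the offset is the distance between the image center and the midpoint of the two fitted lines at the bottom of the image. The image width, conversion factor, and sign convention below are assumptions:

```python
import numpy as np

xm_per_pix = 3.7 / 700  # assumed: 3.7 m lane width spans ~700 px

def vehicle_offset(left_fit_x, right_fit_x, img_width=1280):
    # Lane center at the bottom of the image vs. the camera (image center).
    lane_center = (left_fit_x[-1] + right_fit_x[-1]) / 2.0
    # Sign convention (assumed): negative when the lane center lies
    # to the right of the image center.
    return (img_width / 2.0 - lane_center) * xm_per_pix
```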

10. Display radius and vehicle position

11. Pipeline (Video)

see P2-Videos.ipynb

Sanity_check:

This function checks the plausibility of the fitted lines (see P2-Videos.ipynb).

Video results:

The video results are in the folder ./output_videos.

Here's a link to my video result.

Here's a link to my video tuning result.

Here's a link to my video tuning result for the challenge video.

Here's a link to my video tuning result for the harder challenge video. Here the pipeline is not sufficient, and bigger adjustments would be needed.

Discussion

1. Briefly discuss any problems / issues you faced in your implementation of this project.

Where will your pipeline likely fail?

What could you do to make it more robust?